Alertness monitoring in the driving context improves safety and saves lives. Computer-vision-based alertness monitoring is an active area of research. However, existing algorithms and datasets for alertness monitoring primarily target younger adults (18-50 years old). We present a system for in-vehicle alertness monitoring aimed at older adults. Through a design study, we identified variables and parameters appropriate for older adults traveling independently in Level 5 vehicles. We implemented a prototype traveler monitoring system and evaluated the alertness detection algorithm with ten older adults (aged 70 and above). We report the system design and implementation at a level of detail suitable for novices and practitioners. Our study suggests that dataset development is the foremost challenge in building alertness monitoring systems targeted at older adults. This study is the first in a hitherto under-studied population and has implications, through its participatory methods, for future algorithm development and system design.
Regularising the parameter matrices of neural networks is ubiquitous in training deep models. Typical regularisation approaches suggest initialising weights with small random values and penalising weights to promote sparsity. However, these widely used techniques may be less effective in certain scenarios. Here, we study the Koopman autoencoder model, which includes an encoder, a Koopman operator layer, and a decoder. These models are designed to tackle physics-related problems, offering interpretable dynamics and the ability to incorporate physics-related constraints. However, the majority of existing work employs standard regularisation practices. In our work, we take a step toward augmenting Koopman autoencoders with initialisation and penalty schemes tailored for physics-related settings. Specifically, we propose the "eigeninit" initialisation scheme that samples initial Koopman operators from specific eigenvalue distributions. In addition, we suggest the "eigenloss" penalty scheme that penalises the eigenvalues of the Koopman operator during training. We demonstrate the utility of these schemes on two synthetic datasets, a driven pendulum and flow past a cylinder, and two real-world problems, ocean surface temperatures and cyclone wind fields. We find on these datasets that eigenloss and eigeninit improve the convergence rate by up to a factor of 5 and reduce the cumulative long-term prediction error by up to a factor of 3. Such findings point to the utility of incorporating similar schemes as an inductive bias in other physics-related deep learning approaches.
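Below is a minimal PyTorch sketch of what an "eigeninit"-style initialisation and an "eigenloss"-style penalty could look like. The construction from 2x2 rotation-scaling blocks, the annulus of eigenvalue magnitudes, and the unit-circle target are illustrative assumptions, not the exact schemes used in the paper.

```python
import math
import random
import torch

def eigeninit(dim, radius_low=0.9, radius_high=1.0):
    """Real Koopman matrix built from 2x2 rotation-scaling blocks; each block
    contributes a conjugate eigenvalue pair r*exp(+/- i*theta) with r in the
    chosen annulus (assumed distribution, for illustration only)."""
    K = torch.zeros(dim, dim)
    i = 0
    while i + 1 < dim:
        r = random.uniform(radius_low, radius_high)
        theta = random.uniform(0.0, 2.0 * math.pi)
        c, s = math.cos(theta), math.sin(theta)
        K[i:i + 2, i:i + 2] = r * torch.tensor([[c, -s], [s, c]])
        i += 2
    if i < dim:  # odd dimension: one real eigenvalue
        K[i, i] = random.uniform(radius_low, radius_high)
    return torch.nn.Parameter(K)

def eigenloss(K, target_radius=1.0):
    """Penalise Koopman eigenvalue magnitudes that drift from a target radius."""
    eigvals = torch.linalg.eigvals(K)
    return ((eigvals.abs() - target_radius) ** 2).mean()

# Usage sketch: add the penalty to the autoencoder objective, e.g.
#   loss = reconstruction_loss + lam * eigenloss(koopman_matrix)
```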
Plastic shopping bags that get carried away from the side of roads and tangled on cotton plants can end up at cotton gins if not removed before the harvest. Such bags may not only cause problems in the ginning process but may also get embedded in the cotton fibers, reducing their quality and marketable value. Therefore, the bags need to be detected, located, and removed before the cotton is harvested. Manually detecting and locating these bags in cotton fields is a labor-intensive, time-consuming, and costly process. To address these challenges, we present the application of four variants of YOLOv5 (YOLOv5s, YOLOv5m, YOLOv5l and YOLOv5x) for detecting plastic shopping bags using Unmanned Aircraft Systems (UAS)-acquired RGB (Red, Green, and Blue) images. We also present fixed-effect model tests of the effect of plastic-bag color and YOLOv5 variant on average precision (AP), mean average precision (mAP@50), and accuracy. In addition, we demonstrate the effect of the height of plastic bags on detection accuracy. We found that bag color had a significant effect (p < 0.001) on accuracy across all four variants, while it did not show a significant effect on the AP of YOLOv5m (p = 0.10) and YOLOv5x (p = 0.35) at the 95% confidence level. Similarly, the YOLOv5 variant did not show a significant effect on the AP (p = 0.11) and accuracy (p = 0.73) for white bags, but it had significant effects on the AP (p = 0.03) and accuracy (p = 0.02) for brown bags, as well as on the mAP@50 (p = 0.01) and inference speed (p < 0.0001). Additionally, the height of plastic bags had a significant effect (p < 0.0001) on overall detection accuracy. The findings reported in this paper can help speed up the removal of plastic bags from cotton fields before harvest and thereby reduce the amount of contaminants that end up at cotton gins.
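A hedged sketch of the kind of fixed-effect test described above, using statsmodels to test bag color and YOLOv5 variant as factors for AP. The data frame, column names, and values are hypothetical; the paper's exact model specification may differ.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical per-run AP values for two bag colors and four YOLOv5 variants.
results = pd.DataFrame({
    "color":   ["white", "white", "brown", "brown"] * 4,
    "variant": ["s"] * 4 + ["m"] * 4 + ["l"] * 4 + ["x"] * 4,
    "ap":      [0.91, 0.89, 0.72, 0.70, 0.93, 0.92, 0.78, 0.75,
                0.94, 0.93, 0.80, 0.79, 0.94, 0.92, 0.82, 0.80],
})

# Two-factor fixed-effect model: AP explained by color and variant.
model = ols("ap ~ C(color) + C(variant)", data=results).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-tests and p-values for each factor
```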
Computational catalysis is playing an increasingly significant role in the design of catalysts across a wide range of applications. A common task for many computational methods is to accurately compute the minimum binding energy - the adsorption energy - for an adsorbate and a catalyst surface of interest. Traditionally, the identification of low-energy adsorbate-surface configurations relies on heuristic methods and researcher intuition. As the desire to perform high-throughput screening increases, it becomes challenging to rely on heuristics and intuition alone. In this paper, we demonstrate that machine learning potentials can be leveraged to identify low-energy adsorbate-surface configurations more accurately and efficiently. Our algorithm provides a spectrum of trade-offs between accuracy and efficiency, with one balanced option finding the lowest-energy configuration, within a 0.1 eV threshold, 86.63% of the time, while achieving a 1387x speedup in computation. To standardize benchmarking, we introduce the Open Catalyst Dense dataset containing nearly 1,000 diverse surfaces and 87,045 unique configurations.
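The screening strategy described above can be illustrated with a short ASE sketch that relaxes several candidate adsorbate placements and keeps the lowest-energy one. Here the EMT calculator stands in for a machine-learned potential, and the slab, adsorbate, and thresholds are illustrative assumptions; reporting an adsorption energy would additionally require subtracting clean-slab and gas-phase reference energies.

```python
from ase.build import fcc111, add_adsorbate
from ase.calculators.emt import EMT
from ase.optimize import BFGS

sites = ["ontop", "bridge", "fcc", "hcp"]   # candidate adsorption sites on fcc(111)
best_energy, best_site = None, None
for site in sites:
    slab = fcc111("Cu", size=(2, 2, 3), vacuum=8.0)
    add_adsorbate(slab, "C", height=2.0, position=site)
    slab.calc = EMT()                       # placeholder for an ML potential
    BFGS(slab, logfile=None).run(fmax=0.05, steps=200)
    energy = slab.get_potential_energy()
    if best_energy is None or energy < best_energy:
        best_energy, best_site = energy, site

print(f"lowest-energy configuration: {best_site} ({best_energy:.3f} eV)")
```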
Data-driven modeling approaches such as jump tables are promising techniques for modeling populations of resistive random-access memory (ReRAM) or other emerging memory devices in hardware neural network simulations. As these tables rely on data interpolation, this work explores open questions about their fidelity in relation to the stochastic device behavior they model. We study how various jump table device models impact the attained network performance estimates, a concept we define as modeling bias. Two methods of jump table device modeling, binning and Optuna-optimized binning, are explored using synthetic data with known distributions for benchmarking purposes, as well as experimental data obtained from TiOx ReRAM devices. Results for a multi-layer perceptron trained on MNIST show that device models based on binning can behave unpredictably, particularly when the device dataset contains few points, sometimes over-promising and sometimes under-promising target network accuracy. This paper also proposes device-level metrics that indicate trends similar to the modeling bias metric at the network level. The proposed approach opens the possibility of future investigations into statistical device models with better performance, as well as experimentally verified modeling bias in different in-memory computing and neural network architectures.
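One possible reading of a binning-based jump table is sketched below: measured (starting state, conductance update) pairs are grouped into bins over the starting conductance, and each simulated pulse samples an update from the matching bin's empirical distribution. The bin count, fallback rule, and synthetic data are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class BinnedJumpTable:
    def __init__(self, start_states, deltas, n_bins=20):
        # Bin edges over the observed range of starting conductances.
        self.edges = np.linspace(start_states.min(), start_states.max(), n_bins + 1)
        idx = np.clip(np.digitize(start_states, self.edges) - 1, 0, n_bins - 1)
        self.bins = [deltas[idx == b] for b in range(n_bins)]

    def step(self, state, rng):
        """Sample one stochastic conductance update for a device at `state`."""
        b = int(np.clip(np.digitize(state, self.edges) - 1, 0, len(self.bins) - 1))
        samples = self.bins[b]
        if len(samples) == 0:          # empty bin: fall back to no update
            return state
        return state + rng.choice(samples)

# Toy usage with synthetic data standing in for TiOx ReRAM measurements.
rng = np.random.default_rng(0)
start = rng.uniform(0.0, 1.0, size=5000)
delta = 0.05 * (1.0 - start) + 0.01 * rng.standard_normal(5000)  # toy potentiation
table = BinnedJumpTable(start, delta)
state = 0.1
for _ in range(10):
    state = table.step(state, rng)
```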
We study the problem of identifying biometric information about an individual by looking at shadows of objects cast on diffuse surfaces. Through a maximum-likelihood analysis, we show that the biometric information leakage in shadows can be sufficient for reliable identity inference under representative scenarios. We then develop a learning-based method that demonstrates this phenomenon in real settings, exploiting the subtle cues in shadows that are the source of the leakage, without requiring any labeled real data. In particular, our approach relies on building synthetic scenes composed of 3D face models obtained from a single photograph of each identity. We transfer what we learn from the synthetic data to real data in an entirely unsupervised way. Our model generalizes well to the real domain and is robust to several variations in the scenes. We report high classification accuracy on an identity classification task carried out in scenes with unknown geometry and occluding objects.
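A minimal sketch of the synthetic-to-real workflow described above: an identity classifier is trained only on rendered synthetic shadow images and then evaluated directly on real captures, so no real labels are used for training. The folder paths, backbone, and hyper-parameters are assumptions; the rendering pipeline and transfer procedure of the paper are not reproduced here.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
synthetic = datasets.ImageFolder("renders/synthetic_shadows", transform=tfm)  # hypothetical path
real = datasets.ImageFolder("captures/real_shadows", transform=tfm)           # hypothetical path

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(synthetic.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loader = torch.utils.data.DataLoader(synthetic, batch_size=32, shuffle=True)

for epoch in range(10):                     # supervised training on synthetic labels only
    for x, y in loader:
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

model.eval()                                # real labels appear only in this evaluation
with torch.no_grad():
    correct = sum((model(x.unsqueeze(0)).argmax(1) == y).item() for x, y in real)
print("real-domain identity classification accuracy:", correct / len(real))
```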
Seismic phase association connects earthquake arrival-time measurements to their causative sources. Effective association must determine the number of discrete events, their locations and origin times, and must differentiate real arrivals from measurement artifacts. The advent of deep learning pickers, which provide high rates of picks from closely overlapping small-magnitude earthquakes, motivates revisiting the association problem and approaching it with deep learning methods. We have developed a Graph Neural Network associator that simultaneously predicts source space-time localization and discrete source-arrival association likelihoods. The method is applicable to arbitrary-geometry, time-varying seismic networks of hundreds of stations, and is robust to high rates of sources and input picks of variable noise and quality. Our Graph Earthquake Neural Interpretation Engine (GENIE) uses one graph to represent the stations and another graph to represent the spatial source region. GENIE learns relationships from the data that enable it to determine robust source and source-arrival associations. We train on synthetic data and test the method on real data from the Northern California (NC) seismic network, using input generated by the PhaseNet deep learning phase picker. We successfully re-detect ~96% of all events M>1 reported by the USGS over 500 days between 2000 and 2022. Over a 100-day continuous processing interval in 2017-2018, we detect ~4.2x the number of events reported by the USGS. Our new events have estimated magnitudes below the completeness magnitude of the USGS catalog and are located close to active faults and quarries in the region. Our results demonstrate that GENIE can effectively solve the association problem under complex seismic monitoring conditions.
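The pick-to-source association idea can be pictured with a toy model: pick features are embedded with one round of message passing over a pick adjacency (e.g., linking picks from nearby stations), and a score is produced for every (pick, candidate source node) pair. The real GENIE architecture, graphs, and training objective are considerably richer; everything below is an illustrative assumption.

```python
import torch
from torch import nn

class ToyAssociator(nn.Module):
    def __init__(self, pick_dim, src_dim, hidden=64):
        super().__init__()
        self.pick_mlp = nn.Sequential(nn.Linear(pick_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden))
        self.src_mlp = nn.Sequential(nn.Linear(src_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
        self.score = nn.Bilinear(hidden, hidden, 1)

    def forward(self, pick_feats, pick_adj, src_feats):
        h = self.pick_mlp(pick_feats)
        h = pick_adj @ h                     # one round of message passing between picks
        s = self.src_mlp(src_feats)
        n_p, n_s = h.shape[0], s.shape[0]
        # Association likelihood for every (pick, source-node) pair.
        pairs = self.score(h.unsqueeze(1).expand(n_p, n_s, -1).reshape(-1, h.shape[1]),
                           s.unsqueeze(0).expand(n_p, n_s, -1).reshape(-1, s.shape[1]))
        return torch.sigmoid(pairs).reshape(n_p, n_s)

# Example: 30 picks with 4 toy features, a trivial pick adjacency,
# and 50 candidate source nodes with (x, y, depth) coordinates.
model = ToyAssociator(pick_dim=4, src_dim=3)
likelihood = model(torch.randn(30, 4), torch.eye(30), torch.randn(50, 3))
```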
Existing data-driven and feedback traffic control strategies do not consider the heterogeneity of real-time data measurements. Moreover, traditional reinforcement learning (RL) methods typically converge slowly due to their lack of data efficiency. In addition, conventional optimal perimeter control schemes require exact knowledge of the system dynamics and are therefore fragile to endogenous uncertainties. To tackle these challenges, this work proposes an integral reinforcement learning (IRL) based approach to learn the macroscopic traffic dynamics for adaptive optimal perimeter control. This work makes the following primary contributions to the transportation literature: (a) a continuous-time control is developed with discretized gain updates to adapt to discrete-time sensor data; (b) to reduce sampling complexity and use the available data more efficiently, the experience replay (ER) technique is introduced into the IRL algorithm; (c) the proposed method relaxes the requirement for model calibration in a "model-free" manner that is robust against modeling uncertainty and enhances real-time performance via a data-driven RL algorithm; (d) the convergence of the IRL-based algorithm and the stability of the controlled traffic dynamics are proven via Lyapunov theory. The optimal control law is parameterized and then approximated by neural networks (NN), which alleviates the computational complexity. State and input constraints are considered while no model linearization is required. Numerical examples and simulation experiments are presented to verify the effectiveness and efficiency of the proposed method, as sketched below.
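One way to picture the combination of integral reinforcement learning with experience replay is the least-squares sketch below: stored tuples of feature differences and integral costs are refit in batch to update the value weights of V(x) ~ w^T phi(x). The toy features, dynamics, and cost are assumptions and do not reflect the paper's perimeter-control formulation, constraints, or NN parameterization.

```python
import numpy as np

def phi(x):
    """Quadratic features of a toy 2-D accumulation state."""
    x1, x2 = x
    return np.array([x1 * x1, x1 * x2, x2 * x2])

class ReplayBuffer:
    def __init__(self):
        self.rows, self.targets = [], []

    def add(self, x_t, x_next, integral_cost):
        # IRL Bellman relation: w^T (phi(x_t) - phi(x_next)) = integral cost over [t, t+T].
        self.rows.append(phi(x_t) - phi(x_next))
        self.targets.append(integral_cost)

    def fit_value_weights(self):
        A, b = np.array(self.rows), np.array(self.targets)
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return w

# Toy usage with synthetic transitions standing in for measured traffic data.
rng = np.random.default_rng(1)
buf = ReplayBuffer()
for _ in range(200):
    x = rng.uniform(0.2, 1.0, size=2)
    x_next = 0.95 * x                      # placeholder closed-loop decay
    cost = float(x @ x) * 0.1              # placeholder integral running cost
    buf.add(x, x_next, cost)
w = buf.fit_value_weights()                # value weights reused across gain updates
```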
We study the potential of data-driven deep learning methods for separating two communication signals from an observation of their mixture. In particular, we assume knowledge of the generative process of one of the signals, referred to as the signal of interest (SOI), and no knowledge of the generative process of the second signal, referred to as interference. This form of the single-channel source separation problem is also known as interference rejection. We show that capturing high-resolution temporal structure (non-stationarities), which enables accurate synchronization to both the SOI and the interference, leads to substantial performance gains. With this key insight, we propose a domain-informed neural network (NN) design that improves upon both "off-the-shelf" NNs and classical detection and interference rejection methods, as shown in our simulations. Our findings highlight the key role that communication-domain-specific knowledge plays in developing data-driven approaches that hold the promise of unprecedented gains.
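A minimal sketch of this setup, under the assumption that the SOI can be synthesised from its known generative model (a toy QPSK baseband here) while the interference is treated as an unknown waveform; a small 1-D convolutional network is trained to recover the SOI from the mixture. The signal models and architecture are illustrative, not the paper's domain-informed design.

```python
import torch
from torch import nn

def sample_soi(batch, length):
    """Known generative process: random unit-power QPSK samples (I/Q channels)."""
    symbols = torch.randint(0, 2, (batch, 2, length)).float() * 2 - 1
    return symbols / (2 ** 0.5)

def sample_interference(batch, length):
    """Stand-in for interference whose generative process is unknown."""
    return 0.7 * torch.randn(batch, 2, length)

net = nn.Sequential(nn.Conv1d(2, 64, 9, padding=4), nn.ReLU(),
                    nn.Conv1d(64, 64, 9, padding=4), nn.ReLU(),
                    nn.Conv1d(64, 2, 9, padding=4))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    soi = sample_soi(32, 512)
    mixture = soi + sample_interference(32, 512)
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(mixture), soi)   # reconstruct the SOI only
    loss.backward()
    opt.step()
```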
Learning new tasks and skills in succession without losing prior learning (i.e., catastrophic forgetting) is a computational challenge for both artificial and biological neural networks, yet artificial systems struggle to match their biological counterparts. Mammalian brains employ numerous neural processes that support continual learning during sleep, and these are ripe candidates for artificial adaptation. Here, we investigate how modeling three distinct components of mammalian sleep affects continual learning in artificial neural networks: (1) a veridical memory replay process observed during non-rapid eye movement (NREM) sleep; (2) a generative memory replay process linked to REM sleep; and (3) a synaptic downscaling process that has been proposed to tune signal-to-noise ratios and support neural upkeep. We find benefits from including all three sleep components when evaluating performance on a continual-learning CIFAR-100 image classification benchmark: maximum accuracy improved during training and catastrophic forgetting was reduced during later tasks. Although some catastrophic forgetting persisted over the course of network training, higher levels of synaptic downscaling led to better retention of early tasks and further facilitated the recovery of early-task accuracy during subsequent training. A key takeaway is that there is a trade-off in choosing the level of synaptic downscaling: more aggressive downscaling better protects early tasks, while less downscaling enhances the ability to learn new tasks, and intermediate levels strike a balance with the highest overall accuracy during training. Overall, our results both provide insight into how sleep components can be adapted to enhance artificial continual-learning systems and highlight areas of future neuroscientific sleep research that could further advance such systems.
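Two of the sleep components described above, veridical replay and synaptic downscaling, can be sketched as a short "sleep phase" run between tasks; the generative-replay component and all hyper-parameters (buffer size, downscaling factor, replay steps) are omitted or assumed here.

```python
import torch
from torch import nn

def sleep_phase(model, optimizer, replay_buffer, downscale=0.95, replay_steps=100):
    """replay_buffer: list of stored (input_batch, label_batch) pairs from earlier tasks."""
    loss_fn = nn.CrossEntropyLoss()
    # (1) NREM-like veridical replay of stored examples.
    for _ in range(replay_steps):
        x, y = replay_buffer[torch.randint(len(replay_buffer), (1,)).item()]
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    # (3) Synaptic downscaling: shrink weights toward zero to tune signal-to-noise.
    with torch.no_grad():
        for p in model.parameters():
            p.mul_(downscale)

# Usage sketch: call sleep_phase(model, optimizer, buffer) after training each task.
```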